    GPU-driven recombination and transformation of YCoCg-R video samples

    Common programmable Graphics Processing Units (GPUs) are capable of more than just rendering real-time effects for games. They can also be used for image processing and for accelerating video decoding. This paper describes an extended GPU implementation of the H.264/AVC YCoCg-R to RGB color space transformation. Both the color space transformation and the recombination of the color samples from a nontrivial data layout are performed by the GPU. On mid- to high-range GPUs, this extended implementation offers a significant gain in processing speed over an existing basic GPU version and an optimized CPU implementation. An ATI X1900 GPU was capable of processing more than 73 high-resolution 1080p YCoCg-R frames per second, over twice the speed of a CPU-only transformation on a Pentium D 820.
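    For context, the YCoCg-R to RGB transformation at the heart of this work is a short lifting scheme. Below is a minimal CUDA sketch of the per-pixel inverse transform; the kernel name, sample layout, and bit depth are illustrative assumptions (the paper itself uses pixel shaders, and the recombination from the nontrivial data layout is not shown here):

        #include <cuda_runtime.h>

        // Minimal sketch of the reversible YCoCg-R -> RGB lifting transform.
        // Assumes the Y, Co, and Cg samples have already been recombined into
        // flat planes; names, layout, and bit depth are illustrative only.
        __global__ void ycocgr_to_rgb(const int *Y, const int *Co, const int *Cg,
                                      unsigned char *rgb, int numPixels)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= numPixels) return;

            // Inverse lifting steps of YCoCg-R
            int t = Y[i] - (Cg[i] >> 1);
            int g = Cg[i] + t;
            int b = t - (Co[i] >> 1);
            int r = b + Co[i];

            rgb[3 * i + 0] = (unsigned char)min(max(r, 0), 255);
            rgb[3 * i + 1] = (unsigned char)min(max(g, 0), 255);
            rgb[3 * i + 2] = (unsigned char)min(max(b, 0), 255);
        }

    A launch such as ycocgr_to_rgb<<<(numPixels + 255) / 256, 256>>>(...) processes one pixel per thread; because the transform is a reversible lifting scheme, running these steps in reverse order reproduces the exact forward transform.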

    How metadata enables enriched file-based production workflows

    As file-based production technology gains industry understanding and commercial products become commonplace, many broadcasting and production facilities are commencing re-engineering processes towards file-based production workflows. Sufficient attention, however, should also be paid to the development and incorporation of standardized metadata in order to reach the full potential of such file-based production environments. Beyond its initial meaning, metadata and its underlying data models can represent much more than just meta-information about audiovisual media assets. In fact, properly modeled metadata can provide the structure that holds various media assets together and guides creative people through production workflows and complex media production tasks. Metadata should hence become a first-class citizen in tomorrow's file-based production facilities. The aim of this paper is to show how standardized metadata schemes and data models, complemented by custom metadata developments, can be employed practically in a file-based media production environment to construct a coherently integrated production platform. The paper discusses the types of metadata exchanged between different parts of the system, which enables the implementation of an entire production workflow and provides seamless integration between the different components.

    Performance evaluation of H.264/AVC decoding and visualization using the GPU

    The coding efficiency of the H.264/AVC standard makes the decoding process computationally demanding. This has limited the availability of cost-effective, high-performance solutions. Modern computers are typically equipped with powerful yet cost-effective Graphics Processing Units (GPUs) to accelerate graphics operations. These GPUs can be addressed by means of a 3-D graphics API such as Microsoft Direct3D or OpenGL, using programmable shaders as generic processing units for vector data. The new CUDA (Compute Unified Device Architecture) platform from NVIDIA provides a straightforward way to address the GPU directly, without a 3-D graphics API in the middle. In CUDA, a compiler generates executable code from C code annotated with specific modifiers that determine the execution model. This paper first presents an in-house developed H.264/AVC renderer that executes motion compensation (MC), reconstruction, and Color Space Conversion (CSC) entirely on the GPU, steered by Direct3D in combination with programmable pixel and vertex shaders. Next, we present a GPU-enabled decoder built on the new CUDA architecture from NVIDIA, which likewise performs MC, reconstruction, and CSC on the GPU. We compare both GPU-enabled decoders, as well as a CPU-only decoder, in terms of speed, complexity, and CPU requirements. Our measurements show that a significant speedup is possible relative to a CPU-only solution. As an example, real-time playback of high-definition video (1080p) was achieved with both our Direct3D-based and CUDA-based H.264/AVC renderers.
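    To make the CUDA remark above concrete, here is a minimal, self-contained sketch of that execution model: the __global__ modifier marks a function as a GPU kernel, and the <<<blocks, threads>>> launch configuration determines how it is parallelized. The reconstruction kernel shown (clipping prediction plus residual to the 8-bit range) is an illustrative assumption rather than the decoder code from the paper, and cudaMallocManaged is a convenience of modern CUDA, not of the CUDA version available at the time:

        #include <cstdio>
        #include <cuda_runtime.h>

        // __global__ marks a GPU entry point; the <<<...>>> launch below sets
        // the execution model (number of blocks and threads per block).
        // Illustrative reconstruction step: out = clip(prediction + residual).
        __global__ void reconstruct(const unsigned char *pred, const short *resid,
                                    unsigned char *out, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) {
                int v = pred[i] + resid[i];
                out[i] = (unsigned char)min(max(v, 0), 255);  // clip to [0, 255]
            }
        }

        int main()
        {
            const int n = 1920 * 1080;  // one 1080p luma plane
            unsigned char *pred, *out;
            short *resid;
            cudaMallocManaged(&pred, n);
            cudaMallocManaged(&resid, n * sizeof(short));
            cudaMallocManaged(&out, n);
            // (prediction and residual planes would be filled by earlier stages)

            reconstruct<<<(n + 255) / 256, 256>>>(pred, resid, out, n);
            cudaDeviceSynchronize();
            printf("first reconstructed sample: %d\n", out[0]);

            cudaFree(pred); cudaFree(resid); cudaFree(out);
            return 0;
        }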

    Establishing a customer relationship management between the broadcaster and the digital user

    As the consumer is becoming digital - i.e., carrying personal, mobile, always internet-connected devices that leave a digital footprint anytime, anywhere - new opportunities arise for the “classic” broadcast industry to set up and maintain a direct relationship with its TV viewers and radio listeners. Until recently, a broadcaster had a one-way connection with its customers, running from the broadcaster over the TV and radio distribution channels to the physical TV screen or radio set. Interaction was only possible after implementing and deploying expensive, hard-to-develop software on the set-top box. By employing web technology intelligently, a broadcaster can now connect to its consumers more easily and build a direct relationship. In this paper, we discuss how to set up such a system and what the particular needs are in a broadcast context. We use the second screen to collect data and enrich it so that it becomes beneficial information for broadcasters and advertisers.

    The canonical expression of the drama product manufacturing process

    As the broadcast industry is evolving toward IT-based facilities, the production workflows and their associated production metadata should similarly take advantage of commodity IT technologies. This paper presents a manufacturing system for the production of drama television and motion picture programmes, constructed using IT-based technologies in a file-based media environment. This drama production facility implements a production workflow based on common industrial manufacturing processes and extensively models the individual aspects of the drama production process. We aim to show that the different processes contained in this manufacturing workflow can be expressed in terms of elementary building blocks, the canonical processes of media production. By identifying recurring and canonical functionality, process implementations can be simplified, and the input and output of different processes can be coordinated for better integration with external systems.

    Motion Compensation and Reconstruction of H.264/AVC Video Bitstreams using the GPU

    Most modern computers are equipped with powerful yet cost-effective Graphics Processing Units (GPUs) to accelerate graphics operations. Although programmable shaders on these GPUs were designed for the creation of 3-D rendering effects, they can also be used as generic processing units for vector data. This paper proposes a hardware renderer capable of executing motion compensation, reconstruction, and visualization entirely on the GPU by the use of vertex and pixel shaders. Our measurements show that a speedup of 297% can be achieved by relying on the processing power of the GPU, relative to the CPU. As an example, real-time playback of high-definition video (1080p) was achieved at 62.0 frames per second, consuming only 68.2% of all CPU cycles on a modern machine.
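    As a rough illustration of the motion compensation stage, the sketch below copies a macroblock-sized prediction from the reference frame at an integer-pel motion vector offset. This is a simplified CUDA stand-in for the paper's vertex and pixel shader implementation; real H.264/AVC motion compensation also requires quarter-pel interpolation and variable block sizes, and all names here are illustrative:

        #include <cuda_runtime.h>

        // Illustrative integer-pel motion compensation for one 16x16 macroblock:
        // each thread copies one pixel from the motion-shifted reference position.
        __global__ void motion_compensate(const unsigned char *ref, unsigned char *pred,
                                          int width, int height,
                                          int mbX, int mbY,  // macroblock origin
                                          int mvX, int mvY)  // integer-pel motion vector
        {
            int x = mbX + threadIdx.x;
            int y = mbY + threadIdx.y;

            // Clamp the source position so vectors pointing outside the frame
            // read the nearest border pixel.
            int sx = min(max(x + mvX, 0), width - 1);
            int sy = min(max(y + mvY, 0), height - 1);

            pred[y * width + x] = ref[sy * width + sx];
        }

        // Example launch: one 16x16 thread block covers one macroblock, e.g.
        //   motion_compensate<<<1, dim3(16, 16)>>>(ref, pred, w, h, 32, 48, -3, 2);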